Deep subspace clustering based on multiscale self-representation learning with consistency and diversity
Zhuo ZHANG, Huazhu CHEN
Journal of Computer Applications    2024, 44 (2): 353-359.   DOI: 10.11772/j.issn.1001-9081.2023030275

Deep Subspace Clustering (DSC) rests on the assumption that the original data lies in a union of low-dimensional nonlinear subspaces. Existing multi-scale representation learning methods for deep subspace clustering, built on a deep auto-encoder, add fully connected layers between each encoder layer and the corresponding decoder layer to capture multi-scale features, but they neither analyze the nature of the multi-scale features in depth nor consider the multi-scale reconstruction loss between the input and output data. To solve these problems, firstly, a reconstruction loss function was established for each network layer to supervise the learning of encoder parameters at different levels; then, a more effective multi-scale self-representation module was proposed based on the block diagonality of the sum of the common self-representation matrix and the unique self-representation matrices of the multi-scale features; finally, the diversity of the unique self-representation matrices of different scale features was analyzed in depth so that the multi-scale feature matrices were used effectively. On this basis, a Multiscale Self-representation learning with Consistency and Diversity for Deep Subspace Clustering (MSCD-DSC) method was proposed. Experimental results on the Extended Yale B, ORL, COIL20 and Umist datasets show that, compared with the suboptimal method MLRDSC (Multi-Level Representation learning for Deep Subspace Clustering), MSCD-DSC reduces the clustering error rate by 15.44%, 2.22%, 3.37% and 13.17%, respectively, indicating that MSCD-DSC clusters better than the existing methods.
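
As a rough illustration of the self-representation structure described in this abstract, the sketch below (in PyTorch, which the paper does not specify) assumes per-scale feature matrices Z_s taken from the encoder levels, one common self-representation matrix shared by all scales, and one unique matrix per scale. The particular consistency and diversity penalties shown here are placeholder choices, not the paper's exact formulation.

```python
# Hypothetical sketch (not the authors' released code): multi-scale
# self-expression with a shared (consistency) matrix C0 and per-scale
# (diversity) matrices C_s, given features Z_s from each encoder level.
import torch
import torch.nn as nn

class MultiScaleSelfExpression(nn.Module):
    def __init__(self, n_samples, n_scales):
        super().__init__()
        self.C0 = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))  # common matrix
        self.Cs = nn.ParameterList(                                       # unique matrix per scale
            [nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))
             for _ in range(n_scales)]
        )

    def forward(self, feats, lam_expr=1.0, lam_cons=0.1, lam_div=0.1):
        """feats: list of (n_samples, d_s) feature matrices, one per scale."""
        expr_loss, div_loss = 0.0, 0.0
        for s, Z in enumerate(feats):
            C = self.C0 + self.Cs[s]                           # common + unique coefficients
            expr_loss = expr_loss + ((Z - C @ Z) ** 2).sum()   # self-expression ||Z_s - C Z_s||_F^2
        # diversity (placeholder): discourage overlap between unique matrices of different scales
        for s in range(len(feats)):
            for t in range(s + 1, len(feats)):
                div_loss = div_loss + (self.Cs[s] * self.Cs[t]).abs().sum()
        cons_loss = (self.C0 ** 2).sum()                       # keep the shared part well-behaved
        return lam_expr * expr_loss + lam_cons * cons_loss + lam_div * div_loss
```

In a full pipeline this loss would be added to the per-layer reconstruction losses mentioned in the abstract, and the learned coefficients (for example an affinity built from the combined matrix) would then be passed to spectral clustering, as is standard in DSC methods.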

Improved subspace clustering model based on spectral clustering
Ran GAO, Huazhu CHEN
Journal of Computer Applications    2021, 41 (12): 3645-3651.   DOI: 10.11772/j.issn.1001-9081.2021010081

The purpose of subspace clustering is to segment data drawn from different subspaces into the low-dimensional subspaces they essentially belong to. Existing methods based on data self-representation and spectral clustering divide this problem into two consecutive stages: first, an affinity matrix is learned from the high-dimensional data, and then the cluster membership of the data is inferred by applying spectral clustering to the learned affinity matrix. A new data-adaptive sparse regularization term was defined and combined with the Structural Sparse Subspace Clustering (SSSC) model and an improved Sparse Spectral Clustering (SSpeC) model, resulting in a new unified optimization model. In the new model, data similarity and clustering indicators guide each other, which overcomes the blindness of the SSpeC sparsity penalty and makes the similarity discriminative, helping to assign data from different subspaces to different classes; it also remedies the defect that the SSSC model only forces data from the same subspace to take the same labels. Experimental results on common datasets show that the proposed model enhances clustering discrimination and outperforms some classical two-stage methods as well as the SSSC model.
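
For context, the classical two-stage pipeline that this abstract contrasts with can be sketched roughly as follows; this is not the paper's unified model, and the Lasso-based self-representation, the alpha value and the function name two_stage_subspace_clustering are illustrative assumptions.

```python
# Hypothetical sketch of the two-stage pipeline (self-representation, then
# spectral clustering) that the proposed unified model improves upon.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def two_stage_subspace_clustering(X, n_clusters, alpha=0.01):
    """X: (n_samples, n_features) data matrix."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Stage 1: express sample i sparsely by the other samples
        mask = np.arange(n) != i
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[mask].T, X[i])
        C[i, mask] = lasso.coef_
    W = np.abs(C) + np.abs(C).T               # symmetric affinity matrix
    # Stage 2: spectral clustering on the learned affinity
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed"
    ).fit_predict(W)
    return labels
```

The paper's contribution is to replace this sequential scheme with a single optimization in which the affinity and the cluster indicators inform each other.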
